Linear regression models, least-squares problems, normal equations, and stopping criteria for the conjugate gradient method
Authors
Abstract
Minimum-variance unbiased estimates for linear regression models can be obtained by solving least-squares problems. The conjugate gradient method can be successfully used to solve the symmetric positive definite normal equations arising from these least-squares problems. Taking into account the results of Golub and Meurant (1997, 2009) [10,11], Hestenes and Stiefel (1952) [17], and Strakoš and Tichý (2002) [16], which make it possible to approximate the energy norm of the error during the conjugate gradient iterative process, we adapt the stopping criterion introduced by Arioli (2005) [18] to the normal equations, taking into account the statistical properties of the underpinning linear regression problem. Moreover, we show how the energy norm of the error is linked to the χ²-distribution and to the Fisher–Snedecor distribution. Finally, we present the results of several numerical tests that experimentally validate the effectiveness of our stopping criteria. Crown Copyright © 2012 Published by Elsevier B.V. All rights reserved.
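The kind of energy-norm error estimate the abstract alludes to can be sketched as follows: conjugate gradients applied to the normal equations AᵀAx = Aᵀb, with a Hestenes–Stiefel-style lower bound on the energy norm of the error obtained by summing αⱼ‖rⱼ‖² over a delay window of d iterations. This is a minimal illustrative sketch, not the paper's actual stopping criterion; the function name, the delay parameter `d`, and the tolerance handling are assumptions made here for demonstration.

```python
import numpy as np

def cg_normal_equations(A, b, d=5, tol=1e-8, maxit=500):
    """CG on the normal equations A^T A x = A^T b (a CGLS-like sketch).

    Uses the Hestenes-Stiefel identity
        ||e_j||_{A^T A}^2 - ||e_{j+d}||_{A^T A}^2 = sum_{i=j}^{j+d-1} alpha_i ||r_i||^2
    so the partial sum over the last d steps is a lower bound on the
    energy norm of the error d iterations ago.  Illustrative only.
    """
    n = A.shape[1]
    x = np.zeros(n)
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rho = r @ r
    gammas = []                    # terms alpha_j * ||r_j||^2
    for k in range(maxit):
        Ap = A.T @ (A @ p)
        alpha = rho / (p @ Ap)
        gammas.append(alpha * rho)
        x = x + alpha * p
        r = r - alpha * Ap
        rho_new = r @ r
        if rho_new < 1e-32:        # safeguard: residual numerically zero
            break
        # lower bound on ||e_{k-d+1}||^2 in the A^T A energy norm
        if k >= d and sum(gammas[k - d:k]) < tol**2:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```

The delayed estimate is cheap (it reuses quantities CG already computes) but is only a lower bound, which is why the delay d and, in the paper, the statistical properties of the regression model enter the choice of when to stop.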
Similar resources
The conjugate gradient regularization method in Computed Tomography problems
In this work we solve inverse problems coming from the area of Computed Tomography by means of regularization methods based on conjugate gradient iterations. We develop a stopping criterion which is efficient for the computation of a regularized solution for the least-squares normal equations. The stopping rule can be suitably applied also to the Tikhonov regularization method. We report computat...
A Multilevel Block Incomplete Cholesky Preconditioner for Solving Rectangular Sparse Matrices from Linear Least Squares Problems
An incomplete factorization method for preconditioning symmetric positive definite matrices is introduced to solve normal equations. The normal equations are formed as a means to solve rectangular matrices from linear least squares problems. The procedure is based on a block incomplete Cholesky factorization and a multilevel recursive strategy with an approximate Schur complement matrix formed ...
Preconditioning CGNE iteration for inverse problems
The conjugate gradient method applied to the normal equations (CGNE) is known as one of the most efficient methods for the solution of (non-symmetric) linear equations. By stopping the iteration according to a discrepancy principle, CGNE can be turned into a regularization method. We show that CGNE can be accelerated by preconditioning in Hilbert scales, derive (optimal) convergence rates with ...
Isoefficiency Analysis of CGLS Algorithms for Parallel Least Squares Problems
In this paper we study the parallelization of CGLS, a basic iterative method for large and sparse least squares problems whose main idea is to organize the computation of the conjugate gradient method applied to the normal equations. A performance model called the isoefficiency concept is used to analyze the behavior of this method implemented on massively parallel distributed memory computers with two dimensional m...
Estimating the Minimal Backward Error in Lsqr
In this paper we propose practical and efficiently-computable stopping criteria for the iterative solution of large sparse linear least squares (LS) problems. Although we focus our discussion on the algorithm LSQR of Paige and Saunders, many ideas discussed here are also applicable to other conjugate gradient-type algorithms. We review why the 2-norm of the projection of the residual ...
Journal:
Computer Physics Communications
Volume 183, Issue
Pages -
Publication date: 2012